Hello, good morning, welcome to the lecture on interventional medical image processing.
Professor Hornegger is not here today, so he asked me to give the lecture for him.
So I'll briefly introduce myself: my name is Jakob Wasza, and I'm currently a PhD student
at the Pattern Recognition Lab.
There I'm working in the medical image registration group, and one of the research focuses in this
group is the integration of 3D cameras into interventional medical imaging.
And here we have the problem of very noisy data, so preprocessing is a very
fundamental step in this application field.
This is also what this lecture is about today.
This also relates to a problem that you had in your last exercise: the problem of
differentiation, and in particular the ill-posedness of differentiation
in the presence of noise.
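As a minimal sketch, not part of the lecture or the exercise, the following Python snippet illustrates this ill-posedness: a tiny perturbation of a signal produces a much larger perturbation of its finite-difference derivative, roughly amplified by the reciprocal of the sampling step. The signal, noise level, and variable names are chosen purely for illustration.

import numpy as np

n = 1000
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]                                        # sampling step

clean = np.sin(x)                                      # smooth signal, derivative is cos(x)
noisy = clean + np.random.normal(scale=1e-3, size=n)   # tiny additive noise

d_clean = np.diff(clean) / h                           # forward differences
d_noisy = np.diff(noisy) / h

# The signal error stays around 1e-3, but the derivative error is amplified
# by roughly 1/h, i.e. several orders of magnitude larger.
print("max signal error    :", np.max(np.abs(noisy - clean)))
print("max derivative error:", np.max(np.abs(d_noisy - d_clean)))

This is exactly why denoising, as discussed in this lecture, matters before any differentiation step.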
So here's a short overview of today's lecture.
First of all, I'll give you some motivation from my research field, where we use 3D cameras in interventional
imaging.
And the main part of this lecture is about denoising techniques.
Here we'll discuss the concept of normalized convolution and edge-preserving
denoising.
For the latter, we'll cover the bilateral filter, a well-known concept, and the more recently introduced
concept of so-called guided filtering.
At the end of the lecture we'll give some notes on real-time preprocessing on graphics
processing units.
Real-time constraints are a crucial factor in interventional imaging: you cannot
take one minute or even several seconds to preprocess the image, because the physician wants to have
the image immediately.
Okay, first of all some motivation.
What we are doing is some kind of augmented reality or virtual reality scenario.
You can think of an operating room with a physician or a surgeon standing there.
He has the patient lying on the table, cut open, and he has to do a liver resection.
So he has the knife in his hand, and the question is where he has to make the proper cut.
Of course, you first acquire a CT scan of the organ preoperatively, and you
can visualize the vessels in a 3D manner.
However, you have the problem that this is done preoperatively.
So first of all, the physician or the surgeon has to transfer this information from the
preoperatively acquired CT data to the actual scenario he is standing in.
And besides, we have another problem: the organ may deform due to movement or other
factors.
So our solution here is that we register these two images.
First of all, we have this preoperatively acquired CT scan, and during the intervention we use 3D
cameras to capture the organ surface and register these two objects.
By registering and aligning them, we can use volume rendering techniques to display
the vessel structure, so the surgeon is able to see where he has to make the proper cut.
So what we're doing here is a completely automatic and markerless registration scheme.
It's basically using point correspondences that are established across the organ surfaces.
Here you can see these correspondences, denoted by these lines, and based on
these correspondences we can basically compute a transformation that brings these two objects
into alignment.
You can see here the alignment of these objects.
We have this white object denoting the preoperatively acquired CT scan, and this dark object is the
intraoperatively acquired Time-of-Flight or 3D surface.
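As a minimal sketch, and not the actual pipeline used in our work, the following Python snippet shows one common way such a transformation can be estimated from known point correspondences: the SVD-based least-squares solution for a rigid transformation. The function name, the synthetic points, and the noise level are illustrative assumptions.

import numpy as np

def rigid_from_correspondences(src, dst):
    # Estimate rotation R and translation t with dst ~= R @ src + t,
    # given (N, 3) arrays of corresponding surface points.
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Toy usage: recover a known rotation and translation from noisy correspondences
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))                 # stand-in for CT surface points
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.0])
observed = pts @ R_true.T + t_true + rng.normal(scale=0.01, size=pts.shape)

R_est, t_est = rigid_from_correspondences(pts, observed)
print("max rotation error   :", np.max(np.abs(R_est - R_true)))
print("max translation error:", np.max(np.abs(t_est - t_true)))

In practice, the quality of such an estimate depends directly on how noisy the captured 3D surface data is, which is why the denoising techniques in this lecture are so important.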